Gradient-based stochastic optimization methods in Bayesian experimental design
Authors: Xun Huan, Youssef M. Marzouk
Abstract
Optimal experimental design (OED) seeks experiments expected to yield the most useful data for some purpose. In practical circumstances where experiments are time-consuming or resource-intensive, OED can yield enormous savings. We pursue OED for nonlinear systems from a Bayesian perspective, with the goal of choosing experiments that are optimal for parameter inference. Our objective in this co...
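As a concrete illustration of the approach described above (a minimal sketch, not the paper's implementation), the snippet below estimates the expected information gain U(d) of a design d with a nested Monte Carlo estimator and maximizes it with a Robbins-Monro loop driven by a two-point, SPSA-style stochastic gradient. The one-parameter forward model theta*sin(d), the noise level, the sample sizes, and the gain sequences are all hypothetical choices made for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_lik(y, theta, d, sigma=0.1):
        # Gaussian likelihood for the hypothetical forward model G(theta, d) = theta * sin(d)
        mean = theta * np.sin(d)
        return -0.5 * ((y - mean) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

    def eig_estimate(d, n_outer=200, n_inner=200, sigma=0.1):
        # Nested Monte Carlo estimator of the expected information gain U(d)
        theta = rng.standard_normal(n_outer)                    # draws from a N(0,1) prior
        y = theta * np.sin(d) + sigma * rng.standard_normal(n_outer)
        theta_in = rng.standard_normal(n_inner)                 # inner draws for the evidence
        ll_outer = log_lik(y, theta, d, sigma)
        ll_inner = log_lik(y[:, None], theta_in[None, :], d, sigma)
        log_evid = np.logaddexp.reduce(ll_inner, axis=1) - np.log(n_inner)
        return np.mean(ll_outer - log_evid)                     # E[log p(y|theta,d) - log p(y|d)]

    # Robbins-Monro ascent with a two-point (SPSA-style) gradient estimate
    d = 0.5
    for k in range(1, 101):
        a_k, c_k = 0.2 / k, 0.1 / k ** (1.0 / 3.0)              # illustrative gain sequences
        delta = rng.choice([-1.0, 1.0])
        g = (eig_estimate(d + c_k * delta) - eig_estimate(d - c_k * delta)) / (2.0 * c_k * delta)
        d += a_k * g
    print(f"selected design d* ~ {d:.3f}")

The appeal of the two-point gradient is cost: each iteration needs only two noisy EIG evaluations, regardless of the dimension of the design variable.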
Similar resources

Discretization-free Knowledge Gradient Methods for Bayesian Optimization
This paper studies Bayesian ranking and selection (R&S) problems with correlated prior beliefs and continuous domains, i.e. Bayesian optimization (BO). Knowledge gradient methods [Frazier et al., 2008, 2009] have been widely studied for discrete R&S problems, which sample the one-step Bayes-optimal point. When used over continuous domains, previous work on the knowledge gradient [Scott et al., ...
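For context on what "discretization-free" removes, here is a minimal sketch of the standard discrete Monte Carlo knowledge gradient over a fixed grid, the construction the paper generalizes to continuous domains. The toy objective, kernel, grid, and noise level are illustrative assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)

    f = lambda x: np.sin(3.0 * x)                         # hypothetical objective to maximize
    X = rng.uniform(0.0, 1.0, size=(5, 1))
    gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-4).fit(X, f(X).ravel())

    grid = np.linspace(0.0, 1.0, 101)[:, None]            # the fixed discretization the paper avoids
    mu, cov = gp.predict(grid, return_cov=True)

    def knowledge_gradient(i, n_mc=2000, noise=1e-4):
        # One-step KG of sampling grid[i]: E[max of updated posterior mean] - max of current mean
        s = cov[:, i] / np.sqrt(cov[i, i] + noise)        # posterior-mean change per unit normal draw
        z = rng.standard_normal(n_mc)
        mu_next = mu[:, None] + s[:, None] * z[None, :]
        return mu_next.max(axis=0).mean() - mu.max()

    best = max(range(len(grid)), key=knowledge_gradient)
    print(f"KG suggests sampling next at x = {grid[best, 0]:.2f}")

Note that the posterior covariance over the grid grows quadratically with grid size, which is part of what motivates working directly over the continuous domain.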
Stochastic gradient methods for the optimization of water supply systems
Reductions of water deficits for users and energy savings are frequently conflicting issues when optimizing large-scale multi-reservoir and multi-user water supply systems. Undoubtedly, a high level of uncertainty due to hydrologic input variability and water demand behaviour characterizes these problems. The aim of this paper is to provide decision support for the water system authority, in o...
Accelerated Gradient Methods for Stochastic Optimization and Online Learning
Regularized risk minimization often involves non-smooth optimization, either because of the loss function (e.g., hinge loss) or the regularizer (e.g., l1-regularizer). Gradient methods, though highly scalable and easy to implement, are known to converge slowly. In this paper, we develop a novel accelerated gradient method for stochastic optimization while still preserving their computational si...
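A minimal sketch of the idea, assuming a synthetic lasso problem: Nesterov-style momentum wrapped around a stochastic proximal gradient step, with soft-thresholding handling the non-smooth l1 term. The constant step size, batch size, and regularization weight are illustrative choices, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic lasso: min_w 0.5/|B| * ||X_B w - y_B||^2 + lam * ||w||_1 over mini-batches B
    n, p = 500, 20
    X = rng.standard_normal((n, p))
    w_true = np.zeros(p)
    w_true[:3] = [2.0, -1.0, 0.5]
    y = X @ w_true + 0.1 * rng.standard_normal(n)
    lam, step, batch = 0.05, 0.05, 32

    def soft(v, t):
        # Proximal operator of t * ||.||_1 (soft-thresholding)
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    w = np.zeros(p)
    z = w.copy()
    for k in range(1, 2001):
        idx = rng.choice(n, size=batch, replace=False)
        grad = X[idx].T @ (X[idx] @ z - y[idx]) / batch   # mini-batch gradient of the smooth part
        w_new = soft(z - step * grad, step * lam)         # proximal step handles the l1 term
        z = w_new + (k - 1.0) / (k + 2.0) * (w_new - w)   # Nesterov momentum extrapolation
        w = w_new
    print(np.round(w, 2))                                 # recovered weights, approximately sparse

The proximal step is what preserves the simplicity the abstract mentions: the non-smooth term never needs to be differentiated, only its closed-form prox evaluated.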
Conditional gradient type methods for composite nonlinear and stochastic optimization
In this paper, we present a conditional gradient type (CGT) method for solving a class of composite optimization problems where the objective function consists of a (weakly) smooth term and a strongly convex term. While including this strongly convex term in the subproblems of the classical conditional gradient (CG) method improves its convergence rate for solving strongly convex problems, it d...
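As a baseline for the CGT variant the abstract describes, here is the classical conditional gradient (Frank-Wolfe) iteration it modifies, shown on a toy least-squares problem over the probability simplex. The problem data and the standard step-size rule 2/(k+2) are illustrative choices, not the paper's method.

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy problem: min_x 0.5 * ||A x - b||^2 over the probability simplex
    A = rng.standard_normal((30, 10))
    b = rng.standard_normal(30)
    grad = lambda x: A.T @ (A @ x - b)

    x = np.full(10, 0.1)                                  # start at the simplex center
    for k in range(200):
        g = grad(x)
        s = np.zeros(10)
        s[np.argmin(g)] = 1.0                             # linear minimization oracle: a vertex
        x += (2.0 / (k + 2.0)) * (s - x)                  # convex combination keeps x feasible
    g = grad(x)
    print(f"Frank-Wolfe gap ~ {g @ (x - np.eye(10)[np.argmin(g)]):.4f}")

The iteration only ever solves a linear subproblem over the feasible set, which is what makes moving the strongly convex term into that subproblem, as the paper does, a meaningful design change.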
Journal
Title: International Journal for Uncertainty Quantification
Year: 2014
ISSN: 2152-5080
DOI: 10.1615/int.j.uncertaintyquantification.2014006730